---
title: GenAI glossary
description: Brief definitions of terms relevant to DataRobot GenAI capabilities.
section_name: Generative AI
maturity: public-preview
platform: cloud-only

---


# GenAI glossary {: #genai-glossary }

The GenAI glossary provides brief definitions of terms relevant to GenAI capabilities in DataRobot.


#### Chatting {: #chatting data-category=gen-ai }
Sending prompts (and, as a result, LLM payloads) to LLM endpoints based on a single [LLM blueprint](#llm-blueprint) and receiving responses from the LLM. When chatting, context from previous prompts and responses is sent along with the payload.
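The context-carrying behavior described above can be sketched as an accumulating message history. This is a minimal illustration, not DataRobot's API; `send_to_llm` is a hypothetical stand-in for a real LLM endpoint call.

```python
def send_to_llm(payload):
    """Hypothetical stand-in for a real LLM endpoint call; echoes the last message."""
    return "echo: " + payload["messages"][-1]["content"]

history = []

def chat(prompt):
    # The payload carries prior prompt/response pairs so the LLM sees context.
    payload = {"messages": history + [{"role": "user", "content": prompt}]}
    response = send_to_llm(payload)
    history.append({"role": "user", "content": prompt})
    history.append({"role": "assistant", "content": response})
    return response
```

Each call sends the full history, which is how a chat differs from a single, stateless prompt.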

#### Chunking {: #chunking data-category=gen-ai }
The action of taking a body of [unstructured text](#unstructured-text) and breaking it up into smaller pieces of unstructured text [(tokens)](#tokens).
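A common chunking strategy, sketched below, is fixed-size windows with overlap so that context spanning a boundary is not lost. The sizes here are illustrative, not DataRobot defaults.

```python
def chunk_text(text, chunk_size=200, overlap=50):
    """Split unstructured text into overlapping, fixed-size character chunks."""
    chunks = []
    step = chunk_size - overlap  # advance less than chunk_size to overlap chunks
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks
```

Each chunk can then be embedded and stored independently.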

#### Citation {: #citation data-category=gen-ai }
The chunks of text from the [vector database](#vector-database) used during the generation of LLM responses.

#### Deploying (from a playground) {: #deploying-from-a-playground data-category=gen-ai }
The process of registering an LLM blueprint, with all of its associated settings, in the Registry so that it can be deployed with DataRobot's [production suite of products](nxt-registry/index).


#### Embedding {: #embedding data-category=gen-ai }
A numerical (vector) representation of text, or a collection of numerical representations of text. The action of generating embeddings means taking a [chunk](#chunking) of unstructured text and using a text embedding model to convert the text to a numerical representation. The chunk is the input to the embedding model and the embedding is the “prediction” or output of the model.
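The chunk-in, vector-out relationship can be sketched with a toy stand-in for an embedding model. Real embedding models produce dense learned vectors; the bag-of-words vector below is only an illustration of the input/output shape.

```python
# Toy stand-in for a text embedding model: a fixed-vocabulary bag-of-words
# vector. The vocabulary here is hypothetical.
VOCAB = ["model", "data", "text", "vector"]

def embed(chunk):
    """Map a chunk of text (the model input) to a fixed-length numerical vector
    (the "prediction," or output, of the embedding model)."""
    words = chunk.lower().split()
    return [words.count(term) for term in VOCAB]
```

Whatever the chunk, the output vector always has the same length, which is what makes embeddings comparable to one another.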

{% include 'includes/genai/foundational-include.md' %}

{% include 'includes/genai/genai-include.md' %}


#### Large language model (LLM) {: #large-language-model-llm data-category=gen-ai }
An algorithm that uses deep learning techniques and large datasets to understand, summarize, generate, and predict new content.

{% include 'includes/genai/llm-misc-include.md' %}

{% include 'includes/genai/playground-include.md' %}

{% include 'includes/genai/prompt-include.md' %}

#### Retrieval Augmented Generation (RAG) {: #retrieval-augmented-generation-rag data-category=gen-ai }
The process of retrieving relevant information from a [vector database](#vector-database) (or a subset of one) and sending it, along with the prompt, system prompt, and LLM settings, to the LLM endpoint, so that the returned text is grounded in the data in the vector database. This operation can optionally incorporate orchestration to execute a chain of multiple prompts.
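The retrieve-then-generate flow can be sketched with an in-memory "vector database" and a toy bag-of-words embedding. Both are hypothetical stand-ins; real RAG systems use learned embeddings and a dedicated vector store.

```python
import math

# Hypothetical vocabulary for the toy embedding.
VOCAB = ["revenue", "forecast", "weather", "model"]

def embed(text):
    """Toy bag-of-words stand-in for a real embedding model."""
    words = text.lower().split()
    return [words.count(t) for t in VOCAB]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def retrieve(query, documents, k=1):
    """Rank stored chunks by similarity to the query embedding."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_payload(prompt, documents, system_prompt="Answer from the context."):
    """Assemble the grounded payload sent to the LLM endpoint."""
    context = "\n".join(retrieve(prompt, documents))
    return {"system": system_prompt, "context": context, "prompt": prompt}
```

The payload bundles the system prompt, the retrieved context, and the user prompt, which is what grounds the LLM's response in the vector database rather than in the model's training data alone.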

{% include 'includes/genai/system-prompt-include.md' %}

{% include 'includes/genai/token-include.md' %}

{% include 'includes/genai/unstructured-include.md' %}

{% include 'includes/genai/vdb-include.md' %}
